We develop a framework for constructing uncertainty sets with a valid coverage guarantee in an online setting, in which the underlying data distribution can drastically, and even adversarially, shift over time. The technique we propose is highly flexible, as it can be integrated with any online learning algorithm, requiring minimal implementation effort and computational cost. A key advantage of our method over existing alternatives, which are also based on conformal inference, is that we do not need to split the data into training and holdout calibration sets. This allows us to fit the predictive model in a fully online manner and to utilize the most recent observations for constructing calibrated uncertainty sets. Consequently, and in contrast with existing techniques, (i) the sets we build can quickly adapt to new shifts in the distribution; and (ii) our procedure does not require refitting the model at each time step. Using synthetic and real-world benchmark data sets, we demonstrate the validity of our theory and the improved performance of our proposal over existing techniques. To demonstrate the greater flexibility of the proposed method, we show how to construct valid intervals for a multiple-output regression problem, which previous sequential calibration methods cannot handle due to impractical computational and memory requirements.
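A minimal sketch of the kind of fully online calibration described above, in the spirit of adaptive conformal methods; the `RunningMean` toy model, the symmetric interval shape, and the step size `gamma` are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def online_conformal_intervals(x_stream, y_stream, model, alpha=0.1, gamma=0.05):
    """Illustrative online calibration loop (not the paper's exact algorithm).

    At each step the interval half-width theta is updated from the most recent
    coverage errors, and the model itself is updated on the revealed label,
    so no holdout calibration set is ever needed."""
    theta = 1.0                                   # current interval half-width
    intervals, misses = [], []
    for x_t, y_t in zip(x_stream, y_stream):
        y_hat = model.predict(x_t)                # point prediction
        lo, hi = y_hat - theta, y_hat + theta     # uncertainty set at time t
        intervals.append((lo, hi))

        miss = float(not (lo <= y_t <= hi))       # 1 if the set failed to cover
        misses.append(miss)
        # Widen after a miss, shrink after a cover, steering the long-run
        # miscoverage rate toward alpha even under distribution shift.
        theta = max(theta + gamma * (miss - alpha), 1e-6)

        model.update(x_t, y_t)                    # fully online model fit
    return intervals, float(np.mean(misses))

class RunningMean:
    """Toy online 'model': predicts the running mean of past labels."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def predict(self, x):
        return self.mean
    def update(self, x, y):
        self.n += 1
        self.mean += (y - self.mean) / self.n

rng = np.random.default_rng(0)
y = rng.normal(size=2000).cumsum() * 0.02 + rng.normal(size=2000)
_, miscoverage = online_conformal_intervals(y, y, RunningMean(), alpha=0.1)
print(f"empirical miscoverage: {miscoverage:.3f}")  # should hover near alpha
```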
We develop a method to generate predictive regions that cover a multivariate response variable with a user-specified probability. Our work is composed of two components. First, we use a deep generative model to learn a representation of the response that has a unimodal distribution. Existing multiple-output quantile regression approaches are effective in such cases, so we apply them on the learned representation, and then transform the solution to the original space of the response. This process results in a flexible and informative region that can have an arbitrary shape, a property that existing methods lack. Second, we propose an extension of conformal prediction to the multivariate response setting that modifies any method to return sets with a pre-specified coverage level. The desired coverage is theoretically guaranteed in the finite-sample case for any distribution. Experiments conducted on both real and synthetic data show that our method constructs regions that are significantly smaller compared to existing techniques.
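A hedged sketch of the calibration step that supplies the finite-sample guarantee; the score function here is a placeholder (absolute residual), whereas the paper's score is built on the learned generative representation.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    """Finite-sample-valid threshold from calibration non-conformity scores.

    Any method that maps (x, y) to a scalar score can be calibrated this way:
    the region {y : score(x, y) <= q} then covers the true response with
    probability at least 1 - alpha, for any data distribution."""
    n = len(cal_scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)  # conservative quantile
    return np.quantile(cal_scores, level, method="higher")

def in_region(score_fn, x, y_candidate, q):
    """Membership test for the conformalized region at a new point x."""
    return score_fn(x, y_candidate) <= q

# Toy usage with a placeholder score: absolute residual from a constant predictor.
rng = np.random.default_rng(1)
cal_scores = np.abs(rng.normal(size=500))
q = conformal_threshold(cal_scores, alpha=0.1)
print(q, in_region(lambda x, y: abs(y), None, 0.3, q))
```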
The Covid-19 pandemic induced a vast increase in adolescents diagnosed with eating disorders and hospitalized due to eating disorders. This immense growth stemmed partially from the stress of the pandemic but also from increased exposure to content that promotes eating disorders via social media, which, within the last decade, has become plagued by pro-eating disorder content. This study aimed to create a deep learning model capable of determining whether a given social media post promotes eating disorders based solely on image data. Tweets from hashtags that have been documented to promote eating disorders, along with tweets from unrelated hashtags, were collected. After preprocessing, these images were labeled as either pro-eating disorder or not based on the Twitter hashtag from which they were scraped. Several deep-learning models were trained on the scraped dataset and were evaluated based on their accuracy, F1 score, precision, and recall. Ultimately, the vision transformer model was determined to be the most accurate, attaining an F1 score of 0.877 and an accuracy of 86.7% on the test set. The model, which was applied to unlabeled Twitter image data scraped from "#selfie", uncovered seasonal fluctuations in the relative abundance of pro-eating disorder content, which reached its peak in the summertime. These fluctuations correspond not only to the seasons, but also to stressors, such as the Covid-19 pandemic. Moreover, the Twitter image data indicated that the relative amount of pro-eating disorder content has been steadily rising over the last five years and is likely to continue increasing in the future.
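A sketch of how such a binary image classifier and its reported metrics might be set up, assuming torchvision's pretrained ViT-B/16 and scikit-learn metrics; the data pipeline, labels, and training loop from the study are omitted.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Pretrained vision transformer with its head replaced for the binary
# pro-eating-disorder vs. other decision (an illustrative setup, not the
# study's exact configuration).
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)

@torch.no_grad()
def evaluate(model, loader, device="cpu"):
    """Compute the four metrics reported in the study on a labeled test loader
    yielding (image_batch, label_batch) pairs of 224x224 normalized images."""
    model.eval().to(device)
    preds, labels = [], []
    for images, y in loader:
        logits = model(images.to(device))
        preds.extend(logits.argmax(dim=1).cpu().tolist())
        labels.extend(y.tolist())
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds),
        "precision": precision_score(labels, preds),
        "recall": recall_score(labels, preds),
    }
```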
Text-guided image editing can have a transformative impact in supporting creative applications. A key challenge is to generate edits that are faithful to input text prompts, while consistent with input images. We present Imagen Editor, a cascaded diffusion model built by fine-tuning Imagen on text-guided image inpainting. Imagen Editor's edits are faithful to the text prompts, which is accomplished by using object detectors to propose inpainting masks during training. In addition, Imagen Editor captures fine details in the input image by conditioning the cascaded pipeline on the original high resolution image. To improve qualitative and quantitative evaluation, we introduce EditBench, a systematic benchmark for text-guided image inpainting. EditBench evaluates inpainting edits on natural and generated images exploring objects, attributes, and scenes. Through extensive human evaluation on EditBench, we find that object-masking during training leads to across-the-board improvements in text-image alignment -- such that Imagen Editor is preferred over DALL-E 2 and Stable Diffusion -- and, as a cohort, these models are better at object-rendering than text-rendering, and handle material/color/size attributes better than count/shape attributes.
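A small sketch of the object-masking idea: turning detector proposals into binary inpainting masks for training. The box format and image size are assumptions; the detector and the cascaded diffusion training loop are not shown.

```python
import numpy as np

def boxes_to_inpainting_mask(boxes, height, width):
    """Binary mask (1 = region to inpaint) built from detector boxes.

    Masking detected objects during training forces the model to rely on the
    text prompt to reconstruct them, which is the mechanism credited above
    for improved text-image alignment. Boxes are (x0, y0, x1, y1) in pixels."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for x0, y0, x1, y1 in boxes:
        mask[int(y0):int(y1), int(x0):int(x1)] = 1
    return mask

# Illustrative usage with two made-up detections on a 256x256 training image.
mask = boxes_to_inpainting_mask([(30, 40, 120, 160), (180, 50, 240, 110)], 256, 256)
print(f"fraction of the image to inpaint: {mask.mean():.2f}")
```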
Image segmentation is a fundamental task in computer vision. Data annotation for training supervised methods can be labor-intensive, motivating unsupervised methods. Some existing approaches extract deep features from pre-trained networks and build a graph to apply classical clustering methods (e.g., $k$-means and normalized-cuts) as a post-processing stage. These techniques reduce the high-dimensional information encoded in the features to pair-wise scalar affinities. In this work, we replace classical clustering algorithms with a lightweight Graph Neural Network (GNN) trained to achieve the same clustering objective function. However, in contrast to existing approaches, we feed the GNN not only the pair-wise affinities between local image features but also the raw features themselves. Maintaining this connection between the raw features and the clustering goal allows us to perform part semantic segmentation implicitly, without requiring additional post-processing steps. We demonstrate how classical clustering objectives can be formulated as self-supervised loss functions for training our image segmentation GNN. Additionally, we use the Correlation-Clustering (CC) objective to perform clustering without defining the number of clusters ($k$-less clustering). We apply the proposed method for object localization, segmentation, and semantic part segmentation tasks, surpassing state-of-the-art performance on multiple benchmarks.
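Below is a minimal sketch of how a classical clustering objective can serve as a self-supervised loss, using the soft normalized-cut relaxation; the GNN that produces the soft assignments, and the correlation-clustering variant, are omitted.

```python
import torch

def soft_ncut_loss(S, W, eps=1e-8):
    """Differentiable normalized-cut objective for soft cluster assignments.

    S : (N, K) soft assignments (rows sum to 1, e.g. softmax output of the GNN)
    W : (N, N) non-negative pairwise affinities between image features
    Minimizing K - sum_k assoc(A_k, A_k) / assoc(A_k, V) is the classical
    normalized-cut criterion, used here as a self-supervised training loss."""
    d = W.sum(dim=1)                                       # node degrees
    assoc_within = torch.einsum("nk,nm,mk->k", S, W, S)    # S_k^T W S_k
    assoc_total = (S * d.unsqueeze(1)).sum(dim=0)          # S_k^T d
    return S.shape[1] - (assoc_within / (assoc_total + eps)).sum()

# Toy check: random symmetric affinities, N = 6 nodes, K = 2 clusters.
logits = torch.randn(6, 2, requires_grad=True)
W = torch.rand(6, 6)
W = (W + W.T) / 2
loss = soft_ncut_loss(torch.softmax(logits, dim=1), W)
loss.backward()   # gradients flow back to whatever network produced the logits
```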
Generative models are becoming ever more powerful, being able to synthesize highly realistic images. We propose an algorithm for taming these models - changing the probability that the model will produce a specific image or image category. We consider generative models that are powered by normalizing flows, which allows us to reason about the exact generation probability of a given image. Our method is general purpose, and we exemplify it using models that generate human faces, a subdomain with many interesting privacy and bias considerations. Our method can be used in the context of privacy, e.g., removing a specific person from the output of a model, and also in the context of de-biasing by forcing a model to output specific image categories according to a given target distribution. Our method uses a fast fine-tuning process without retraining the model from scratch, achieving the goal in less than 1% of the time taken to initially train the generative model. We evaluate qualitatively and quantitatively, to examine the success of the taming process and output quality.
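A hedged sketch of a fine-tuning objective in the spirit of the "taming" described above; `flow.log_prob` is the assumed interface of a normalizing flow (exact per-sample log-likelihoods), and the toy stand-in below exists only to make the snippet executable. This is not the paper's exact objective.

```python
import torch

def taming_loss(flow, x_steer, x_reference, boost=False, lam=1.0):
    """Fine-tuning loss that shifts the probability of a target set of images.

    With boost=False the loss penalizes high likelihood of x_steer (suppression,
    e.g. removing a specific person for privacy); with boost=True it rewards it
    (e.g. raising the rate of an under-represented category for de-biasing).
    The second term is a standard NLL on reference data so that the rest of the
    model's distribution does not drift during the quick fine-tuning."""
    steer_ll = flow.log_prob(x_steer).mean()
    ref_nll = -flow.log_prob(x_reference).mean()
    return (-steer_ll if boost else steer_ll) + lam * ref_nll

# Smoke test with a stand-in "flow": a fixed standard Gaussian density.
class ToyFlow:
    def log_prob(self, x):
        return -0.5 * (x ** 2).sum(dim=1)

x_steer, x_ref = torch.randn(8, 4), torch.randn(32, 4)
print(taming_loss(ToyFlow(), x_steer, x_ref).item())
```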
This paper presents The Shared Task on Euphemism Detection for the Third Workshop on Figurative Language Processing (FigLang 2022) held in conjunction with EMNLP 2022. Participants were invited to investigate the euphemism detection task: given input text, identify whether it contains a euphemism. The input data is a corpus of sentences containing potentially euphemistic terms (PETs), collected from the GloWbE corpus (Davies and Fuchs, 2015) and human-annotated as containing either a euphemistic or literal usage of a PET. In this paper, we present the results and analyze the common themes, methods, and findings of the participating teams.
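To make the task format concrete, here is a deliberately simple TF-IDF plus logistic-regression baseline on made-up examples; it is not one of the participating systems, and the real data comes from the GloWbE-derived PET corpus described above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy examples in the task's input/output format: 1 = euphemistic use of the
# potentially euphemistic term (PET), 0 = literal use.
texts = [
    "He passed away peacefully last night.",       # euphemistic "passed away"
    "She passed the exam on her first try.",       # literal "passed"
    "The company is downsizing its workforce.",    # euphemistic "downsizing"
    "We are downsizing to a smaller apartment.",   # literal use
]
labels = [1, 0, 1, 0]

baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
baseline.fit(texts, labels)
print(baseline.predict(["Grandpa passed away in his sleep."]))
```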
We suggest the first system that performs real-time semantic segmentation via deep learning on a weak micro-computer, such as the Raspberry Pi Zero v2 (priced at roughly \$15), attached to a toy drone. In particular, since the Raspberry Pi weighs less than 16 grams and is half the size of a credit card, we could easily attach it to the common commercial DJI Tello toy drone (<\$100, <90 grams, 98 $\times$ 92.5 $\times$ 41 mm). The result is an autonomous drone (no laptop or human in the loop) that can detect and classify objects in real time from its on-board monocular RGB camera (no GPS or LiDAR sensors). The companion video demonstrates how this Tello drone scans the lab for people (e.g., for use by firefighters or security forces) and for empty parking spots outside the lab. Existing deep learning solutions are either too slow for real-time computation on such IoT devices or produce results of impractical quality. Our main challenge was to design a system that chooses the best option among the numerous combinations of networks, deep learning platforms/frameworks, compression techniques, and compression ratios. To this end, we provide an efficient search algorithm that aims to find the optimal combination, yielding the best tradeoff between the network's running time and its accuracy/performance.
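As a reference point for the search problem described above, the sketch below enumerates the design space exhaustively and keeps the most accurate combination that meets a real-time latency budget; the candidate lists and the `benchmark` numbers are made up, and the paper's contribution is an efficient search that avoids this brute-force loop.

```python
import itertools

# Illustrative design space (not the paper's exact candidate lists).
NETWORKS = ["unet_small", "enet", "fast_scnn"]
FRAMEWORKS = ["tflite", "ncnn"]
COMPRESSIONS = [("none", 1.0), ("prune", 0.5), ("quantize_int8", 0.25)]

def benchmark(net, framework, compression):
    """Stand-in for an on-device measurement of (latency ms/frame, accuracy mIoU).
    Real numbers must come from running the compressed network on the
    Raspberry Pi Zero 2 itself; these arbitrary toy values only make the
    search loop below executable."""
    _, ratio = compression
    latency = 40.0 + 60.0 * ratio + 2.0 * len(net)
    accuracy = 0.45 + 0.25 * ratio + (0.03 if framework == "tflite" else 0.0)
    return latency, accuracy

def pick_best(budget_ms=100.0):
    """Among all combinations meeting the latency budget, keep the most accurate."""
    best = None
    for net, fw, comp in itertools.product(NETWORKS, FRAMEWORKS, COMPRESSIONS):
        latency, accuracy = benchmark(net, fw, comp)
        if latency <= budget_ms and (best is None or accuracy > best[1]):
            best = ((net, fw, comp[0]), accuracy, latency)
    return best

print(pick_best(budget_ms=100.0))
```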
This paper presents the final results of the 2022 Out-Of-Vocabulary (OOV) challenge. The OOV competition introduces an important aspect that is not commonly studied by Optical Character Recognition (OCR) models, namely, the recognition of scene text instances unseen at training time. The competition compiles a collection of public scene text datasets comprising 326,385 images with 4,864,405 scene text instances, thus covering a wide range of data distributions. A new and independent validation and test set is formed with scene text instances that are out of vocabulary at training time. The competition was structured in two tasks, end-to-end and cropped scene text recognition, respectively. A thorough analysis of the results of the baselines and the different participants is presented. Interestingly, current state-of-the-art models show a significant performance gap under the newly studied setting. We conclude that the OOV dataset proposed in this challenge will be an essential area to explore in order to develop scene text models that achieve more robust and generalized predictions.
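A small sketch of the kind of breakdown this setting calls for: word accuracy reported separately for in-vocabulary and out-of-vocabulary ground-truth words. It is not the official evaluation script, and case- or punctuation-normalization rules are omitted.

```python
def split_accuracy(predictions, ground_truth, train_vocab):
    """Word accuracy split by whether the ground-truth word was seen during
    training; the gap between the two numbers is the effect studied here."""
    buckets = {"in_vocab": [0, 0], "out_of_vocab": [0, 0]}   # [correct, total]
    for pred, gt in zip(predictions, ground_truth):
        key = "in_vocab" if gt in train_vocab else "out_of_vocab"
        buckets[key][1] += 1
        buckets[key][0] += int(pred == gt)
    return {k: (c / t if t else float("nan")) for k, (c, t) in buckets.items()}

# Toy usage with made-up words: the recognizer is correct on seen words only.
print(split_accuracy(predictions=["cafe", "ex1t", "taxi"],
                     ground_truth=["cafe", "exit", "taxi"],
                     train_vocab={"cafe", "taxi"}))
```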
Faults are stochastic by nature, while most man-made systems, and especially computers, work deterministically. This requires connecting probability theory with mathematical logic, automata, and switching-circuit theory. This paper provides such a connection via quantum information theory, which is an intuitive approach since quantum physics obeys probability laws. We present a novel method for computing the diagnosis of switching circuits with gate-based quantum computers. The approach is based on the idea of placing qubits that represent faults in superposition and computing all, often exponentially many, diagnoses simultaneously. We empirically compare this quantum algorithm for diagnostics with approaches based on SAT and model counting. For a benchmark of combinational circuits, we establish an error of less than one percent in estimating the true probability of faults.
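For orientation, here is a classical, exponential-time reference in the spirit of the model-counting baselines mentioned above: it enumerates every health assignment of the gates, which is exactly the set of hypotheses the quantum approach places in superposition. The single-gate circuit and the stuck-at-0 fault mode are toy assumptions.

```python
import itertools

def diagnose(circuit, n_gates, observations, p_fault=0.01):
    """Brute-force probabilistic diagnosis of a switching circuit.

    A health assignment h is a tuple of booleans (True = gate faulty).  Each
    assignment is weighted by its prior probability and kept if it reproduces
    every observed (inputs -> output) pair; the result is the posterior
    probability that each gate is faulty."""
    posterior, total = [0.0] * n_gates, 0.0
    for h in itertools.product([False, True], repeat=n_gates):
        prior = 1.0
        for faulty in h:
            prior *= p_fault if faulty else 1.0 - p_fault
        if all(circuit(inputs, h) == out for inputs, out in observations):
            total += prior
            for i, faulty in enumerate(h):
                if faulty:
                    posterior[i] += prior
    return [p / total for p in posterior] if total else posterior

# Toy circuit: a single AND gate whose fault mode is stuck-at-0.
def and_gate(inputs, health):
    a, b = inputs
    return 0 if health[0] else (a & b)

# Observing (1, 1) -> 0 is only consistent with the gate being faulty.
print(diagnose(and_gate, n_gates=1, observations=[((1, 1), 0)]))  # -> [1.0]
```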